A Second Derivative SQP Method: Global Convergence

Authors

  • Nicholas I. M. Gould
  • Daniel P. Robinson
Abstract

Gould and Robinson (NAR 08/18, Oxford University Computing Laboratory, 2008) gave global convergence results for a second-derivative SQP method for minimizing the exact l1-merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps that were intended to improve the efficiency of the algorithm. Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix Bk—a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the l1-penalty function over a sequence of increasing values of the penalty parameter. Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, these algorithms must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and, therefore, results in asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
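For reference, the exact l1-merit function referred to above has the following standard form (written here, as an assumption, for a problem with inequality constraints c(x) >= 0; the paper's own constraint format may differ):

    \phi(x;\sigma) \;=\; f(x) \;+\; \sigma\,\bigl\|[c(x)]^{-}\bigr\|_{1},
    \qquad [c(x)]^{-}_{i} = \max\{-c_{i}(x),\,0\},

where sigma > 0 is the penalty parameter. The penalty function is "exact" in the sense that, for all sufficiently large finite sigma, minimizers of the constrained problem are minimizers of phi(.; sigma), which is why a sound strategy for increasing sigma is central to a practical implementation.

The abstract also mentions two choices for the positive-definite matrix Bk used in the predictor step: a simple diagonal approximation and a limited-memory BFGS update. The following Python sketch illustrates one standard way to keep a BFGS-style update positive definite on nonconvex problems, via Powell damping; it is a generic illustration only, not the paper's exact formulas, and the function names, the damping constant 0.2, and the dense representation are assumptions made for clarity.

    import numpy as np

    def damped_bfgs_update(B, s, y, eta=0.2):
        # One BFGS update of the positive-definite matrix B with Powell
        # damping: if the curvature s'y is too small relative to s'Bs,
        # y is replaced by a convex combination of y and Bs, so that the
        # update preserves positive definiteness (assumes s != 0).
        Bs = B @ s
        sBs = float(s @ Bs)
        sy = float(s @ y)
        if sy < eta * sBs:
            theta = (1.0 - eta) * sBs / (sBs - sy)
            y = theta * y + (1.0 - theta) * Bs
            sy = float(s @ y)          # now sy = eta * s'Bs > 0
        return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy

    def limited_memory_B(pairs, n, gamma=1.0, m=5):
        # Rebuild a dense positive-definite approximation from the m most
        # recent (s, y) pairs, starting from the scaled identity gamma*I.
        # Dense for clarity only; a practical code would apply the compact
        # limited-memory representation without ever forming B explicitly.
        B = gamma * np.eye(n)
        for s, y in pairs[-m:]:
            B = damped_bfgs_update(B, s, y)
        return B

Powell damping enforces s'y >= eta * s'Bs > 0 after the modification, which is precisely the condition under which a BFGS update preserves positive definiteness; the simple diagonal alternative mentioned in the abstract corresponds to stopping at the scaled identity gamma * I.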


Similar articles

A Second Derivative SQP Method with Imposed ...

Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding th...

Full text

A Second Derivative SQP Method: Theoretical Issues

Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding th...

Full text

A Second Derivative SQP Method: Local Convergence

In [19], we gave global convergence results for a second-derivative SQP method for minimizing the exact l1-merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps that were ...

Full text

A Second-Derivative Trust-Region SQP Method with a “Trust-Region-Free” Predictor Step

In (NAR 08/18 and 08/21, Oxford University Computing Laboratory, 2008) we introduced a second-derivative SQP method (S2QP) for solving nonlinear nonconvex optimization problems. We proved that the method is globally convergent and locally superlinearly convergent under standard assumptions. A critical component of the algorithm is the so-called predictor step, which is computed from a strictly ...

Full text

A Modified Limited SQP Method for Constrained Optimization

In this paper, a modified variant of the Limited SQP method is presented for constrained optimization. The method exploits not only gradient information but also function-value information. Moreover, it requires no additional function or derivative evaluations, and hardly any additional storage or arithmetic operations. Under suitable conditions, the global convergence is esta...

Full text


Journal:
  • SIAM Journal on Optimization

Volume 20, Issue —

Pages —

Publication date: 2010